Generalized Relevance Learning Grassmann Quantization
Mohammadi, M., Babai, M., Wilkinson, M. H. F.
Due to advancements in digital cameras, it is easy to gather multiple images (or videos) of an object under different conditions. Image-set classification has therefore attracted growing attention, and various solutions have been proposed to model such sets. A popular way to model image sets is as subspaces, which form a manifold known as the Grassmann manifold. In this contribution, we extend Generalized Relevance Learning Vector Quantization to deal with the Grassmann manifold. The proposed model returns a set of prototype subspaces and a relevance vector. While prototypes model typical behaviours within classes, the relevance factors specify the most discriminative principal vectors (or images) for the classification task. Together they provide insight into the model's decisions by highlighting the images and pixels influential for predictions. Moreover, because the model learns prototypes, its complexity during inference is independent of dataset size, unlike previous works. We applied it to several recognition tasks, including handwritten digit recognition, face recognition, activity recognition, and object recognition. Experiments demonstrate that it outperforms previous works with lower complexity and can successfully model variation such as handwriting style or lighting conditions. Moreover, the presence of relevances makes the model robust to the choice of subspace dimensionality.
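The core geometric ingredient described above is a relevance-weighted distance between subspaces on the Grassmann manifold, computed from principal angles. A minimal sketch (not the authors' implementation; the uniform-relevance default and the squared-angle weighting are illustrative assumptions):

```python
import numpy as np

def grassmann_distance(A, B, relevance=None):
    """Relevance-weighted distance between two subspaces.

    A, B: (d, k) matrices with orthonormal columns spanning the subspaces.
    relevance: optional length-k nonnegative weights over the principal
               angles (uniform if omitted), mimicking a learned relevance
               vector that emphasizes discriminative principal vectors.
    """
    # Principal angles come from the singular values of A^T B.
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    if relevance is None:
        relevance = np.ones_like(theta) / theta.size
    return float(np.sum(relevance * theta**2))

# Demo subspaces in R^4.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))
E = np.eye(4)
d_same = grassmann_distance(Q, Q)                 # identical planes -> 0
d_orth = grassmann_distance(E[:, :2], E[:, 2:])   # orthogonal planes -> (pi/2)^2
```

With learned (non-uniform) relevance weights, the same distance downweights principal angles that are uninformative for classification.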
Scientists make highly maneuverable miniature robots controlled by magnetic fields
The research team created the miniature robots by embedding magnetic microparticles into biocompatible polymers -- non-toxic materials that are harmless to humans. The robots are 'programmed' to execute their desired functionalities when magnetic fields are applied. The made-in-NTU robots improve on many existing small-scale robots by optimizing their ability to move in six degrees-of-freedom (DoF) -- that is, translational movement along the three spatial axes, and rotational movement about those three axes, commonly known as roll, pitch and yaw angles. While researchers have previously created six-DoF miniature robots, the new NTU miniature robots can rotate 43 times faster than them in the critical sixth DoF when their orientation is precisely controlled. They can also be made with 'soft' materials and thus can replicate important mechanical qualities -- one type can 'swim' like a jellyfish, and another has a gripping ability that can precisely pick and place miniature objects.
An Online Riemannian PCA for Stochastic Canonical Correlation Analysis
Meng, Zihang, Chakraborty, Rudrasis, Singh, Vikas
We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) using a reparametrization of the projection matrices. We show how this reparametrization (into structured matrices), simple in hindsight, directly presents an opportunity to repurpose/adjust mature techniques for numerical optimization on Riemannian manifolds. Our developments nicely complement existing methods for this problem which either require $O(d^3)$ time complexity per iteration with $O(\frac{1}{\sqrt{t}})$ convergence rate (where $d$ is the dimensionality) or only extract the top $1$ component with $O(\frac{1}{t})$ convergence rate. In contrast, our algorithm offers a strict improvement for this classical problem: it achieves $O(d^2k)$ runtime complexity per iteration for extracting the top $k$ canonical components with $O(\frac{1}{t})$ convergence rate. While the paper primarily focuses on the formulation and technical analysis of its properties, our experiments show that the empirical behavior on common datasets is quite promising. We also explore a potential application in training fair models where the label of the protected attribute is missing or otherwise unavailable.
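For context, the quantity that stochastic CCA solvers such as RSG+ approximate is the set of singular values of the whitened cross-covariance matrix. A hedged batch baseline, useful for sanity-checking any stochastic variant (the ridge term `eps` is an illustrative assumption for numerical stability, not part of the paper's method):

```python
import numpy as np

def cca_top_k(X, Y, k, eps=1e-6):
    """Batch CCA baseline: top-k canonical correlations via the SVD of the
    whitened cross-covariance matrix (the quantity stochastic solvers track)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])  # small ridge for stability
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    # T = Lx^{-1} Cxy Ly^{-T}; its singular values are the canonical correlations.
    T = np.linalg.solve(Lx, np.linalg.solve(Ly, Cxy.T).T)
    return np.linalg.svd(T, compute_uv=False)[:k]

# Y is a random invertible linear map of X, so every correlation should be ~1.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
Y = X @ rng.standard_normal((4, 4))
rho = cca_top_k(X, Y, 2)
```

The $O(d^3)$ cost of the Cholesky/SVD steps here is exactly what the paper's $O(d^2k)$ per-iteration stochastic updates avoid.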
A Generalization of Principal Component Analysis
Battaglino, Samuele, Koyuncu, Erdem
Conventional principal component analysis (PCA) finds a principal vector that maximizes the sum of second powers of the principal components. We consider a generalized PCA that aims at maximizing the sum of an arbitrary convex function of principal components. We present a gradient ascent algorithm to solve the problem. For the kernel version of generalized PCA, we show that the solutions can be obtained as fixed points of a simple single-layer recurrent neural network. We also evaluate our algorithms on different datasets.
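The gradient ascent algorithm mentioned in the abstract can be sketched as projected gradient ascent on the unit sphere; with f(t) = t^2 it reduces to a scaled power iteration and recovers the conventional principal vector. A minimal illustration (step size, iteration count, and initialization are assumptions, not the paper's settings):

```python
import numpy as np

def generalized_pca(X, f_grad, iters=200, lr=0.1, seed=0):
    """Projected gradient ascent for max_{||w||=1} sum_i f(w^T x_i).

    f_grad: derivative of the convex per-component function f.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        g = X.T @ f_grad(X @ w)    # gradient of the objective in w
        w = w + lr * g
        w /= np.linalg.norm(w)     # retract back onto the unit sphere
    return w

# f(t) = t^2 recovers conventional PCA's principal vector: the data below
# has dominant variance along the first axis.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([5.0, 1.0, 0.5])
w = generalized_pca(X, lambda t: 2 * t)
```

Swapping in another convex f (e.g. `lambda t: 4 * t**3` for f(t) = t^4) changes which directions the objective emphasizes, which is the point of the generalization.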
Greedy Feature Selection for Subspace Clustering
Dyer, Eva L., Sankaranarayanan, Aswin C., Baraniuk, Richard G.
Unions of subspaces provide a powerful generalization to linear subspace models for collections of high-dimensional data. To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data. Recently, sparse recovery methods have been shown to provide a provable and robust strategy for exact feature selection (EFS)--recovering subsets of points from the ensemble that live in the same subspace. In parallel with recent studies of EFS with L1-minimization, in this paper, we develop sufficient conditions for EFS with a greedy method for sparse signal recovery known as orthogonal matching pursuit (OMP). Following our analysis, we provide an empirical study of feature selection strategies for signals living on unions of subspaces and characterize the gap between sparse recovery methods and nearest neighbor (NN)-based approaches. In particular, we demonstrate that sparse recovery methods provide significant advantages over NN methods and the gap between the two approaches is particularly pronounced when the sampling of subspaces in the dataset is sparse. Our results suggest that OMP may be employed to reliably recover exact feature sets in a number of regimes where NN approaches fail to reveal the subspace membership of points in the ensemble.
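As a reference point, OMP itself is a short greedy loop: pick the column most correlated with the current residual, refit on the selected columns by least squares, repeat. A minimal sketch on an orthonormal dictionary, where exact support recovery is guaranteed (the toy dictionary and coefficients are illustrative, not from the paper's experiments):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k columns of D to fit y."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                 # never reselect a column
        support.append(int(np.argmax(corr)))
        # Refit on the whole support so the residual is orthogonal to it.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return sorted(support), residual

# Orthonormal dictionary: OMP provably recovers the exact support.
D = np.linalg.qr(np.random.default_rng(3).standard_normal((8, 8)))[0]
y = 2.0 * D[:, 1] - 0.5 * D[:, 5]
support, residual = omp(D, y, 2)
```

In the subspace-clustering setting analyzed above, `D` would hold the other points in the ensemble (as unit columns), and exact feature selection means the recovered support lies entirely within `y`'s own subspace.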
Nonnegative Sparse PCA
We describe a nonnegative variant of the "Sparse PCA" problem. The goal is to create a low-dimensional representation from a collection of points which, on the one hand, maximizes the variance of the projected points and, on the other, uses only parts of the original coordinates, thereby creating a sparse representation. What distinguishes our problem from other Sparse PCA formulations is that the projection involves only nonnegative weights of the original coordinates -- a desired quality in various fields, including economics, bioinformatics and computer vision. Adding nonnegativity contributes to sparseness, as it enforces a partitioning of the original coordinates among the new axes. We describe a simple yet efficient iterative coordinate-descent type of scheme which converges to a local optimum of our optimization criteria, giving good results on large real-world datasets.
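To make the nonnegativity constraint concrete, here is a simple projected power iteration for the first nonnegative principal direction. This is a stand-in heuristic, not the authors' coordinate-descent scheme: clipping to the nonnegative orthant after each power step is an assumption made for illustration.

```python
import numpy as np

def nonnegative_pc(X, iters=200, seed=0):
    """First nonnegative principal direction via projected power iteration
    (a simple stand-in for the paper's coordinate-descent scheme)."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)
    rng = np.random.default_rng(seed)
    w = np.abs(rng.standard_normal(X.shape[1]))
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = np.maximum(C @ w, 0.0)   # power step, then clip to w >= 0
        n = np.linalg.norm(w)
        if n == 0.0:                 # all mass clipped: restart direction
            w = np.ones_like(w) / np.sqrt(w.size)
        else:
            w /= n
    return w

# Features 0 and 1 share a strong common factor; feature 2 is weak noise,
# so the nonnegative direction should load on the first two coordinates.
rng2 = np.random.default_rng(4)
t = rng2.standard_normal(300)
X = np.stack([3 * t, 3 * t, 0.1 * rng2.standard_normal(300)], axis=1)
w = nonnegative_pc(X)
```

Notice how the clipping drives small or negative loadings toward zero, which is the partitioning effect the abstract describes.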